64 research outputs found

    On the construction of probabilistic Newton-type algorithms

    It has recently been shown that many existing quasi-Newton algorithms can be formulated as learning algorithms, capable of learning local models of the cost function. Importantly, this understanding allows us to safely start assembling probabilistic Newton-type algorithms, applicable in situations where we only have access to noisy observations of the cost function and its derivatives. This is where our interest lies. We make contributions to the use of non-parametric, probabilistic Gaussian process models in solving these stochastic optimisation problems. Specifically, we present a new algorithm that unites these models with recent probabilistic line search routines to deliver a probabilistic quasi-Newton approach. We also show that the resulting probabilistic optimisation algorithms deliver promising results on challenging nonlinear system identification problems where, by the very nature of the problem, the cost function and its derivative can only be accessed via noisy observations, since no closed-form expressions are available.
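    The abstract does not spell out the algorithm, but the problem class it targets can be sketched: quasi-Newton iterations driven by noisy gradient observations. The Python sketch below is an illustration of that setting only, not the authors' method: it uses a hypothetical toy cost (`noisy_quadratic`), a plain BFGS update with a curvature-condition guard, and a fixed step length in place of the paper's Gaussian process surrogate and probabilistic line search.

```python
import numpy as np

def noisy_quadratic(x, rng, noise=0.05):
    """Hypothetical toy cost whose value and gradient are only observed with additive noise."""
    A = np.diag([10.0, 1.0])
    f = 0.5 * x @ A @ x
    g = A @ x
    return f + noise * rng.standard_normal(), g + noise * rng.standard_normal(g.shape)

def bfgs_update(H, s, y, tol=1e-8):
    """Standard BFGS update of the inverse-Hessian estimate H from step s and
    gradient difference y; skipped when noise destroys the curvature condition."""
    sy = s @ y
    if sy <= tol:
        return H
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def stochastic_quasi_newton(x0, iters=50, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))                      # inverse-Hessian estimate
    _, g = noisy_quadratic(x, rng)
    for _ in range(iters):
        d = -H @ g                          # quasi-Newton search direction
        x_new = x + step * d                # fixed step in place of a probabilistic line search
        _, g_new = noisy_quadratic(x_new, rng)
        H = bfgs_update(H, x_new - x, g_new - g)
        x, g = x_new, g_new
    return x

# Settles near the optimum at the origin, up to the gradient noise level.
print(stochastic_quasi_newton([3.0, -2.0]))
```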

    A Bayesian Filtering Algorithm for Gaussian Mixture Models

    A Bayesian filtering algorithm is developed for a class of state-space systems that can be modelled via Gaussian mixtures. In general, the exact solution to this filtering problem involves an exponential growth in the number of mixture terms, which is handled here by a Gaussian mixture reduction step after both the time and measurement updates. In addition, a square-root implementation of the unified algorithm is presented, and the algorithm is profiled on several simulated systems, including state estimation for two non-linear systems that lie strictly outside the class considered in the paper.
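    The abstract does not state which reduction criterion is used; the Python sketch below shows the general shape of such a step under a simple assumption, greedily merging the pair of components whose means are closest and moment-matching the result. A practical implementation would typically use a divergence-based criterion (e.g. KL-bound merging) rather than Euclidean distance between means; the example mixture at the end is hypothetical.

```python
import numpy as np

def merge_pair(w1, m1, P1, w2, m2, P2):
    """Moment-matched merge of two Gaussian components (weight, mean, covariance)."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    d1, d2 = m1 - m, m2 - m
    P = (w1 * (P1 + np.outer(d1, d1)) + w2 * (P2 + np.outer(d2, d2))) / w
    return w, m, P

def reduce_mixture(weights, means, covs, max_components):
    """Greedy reduction: repeatedly merge the pair of components with the closest means."""
    comps = list(zip(weights, means, covs))
    while len(comps) > max_components:
        best, pair = np.inf, None
        for i in range(len(comps)):
            for j in range(i + 1, len(comps)):
                d = np.linalg.norm(comps[i][1] - comps[j][1])
                if d < best:
                    best, pair = d, (i, j)
        i, j = pair
        merged = merge_pair(*comps[i], *comps[j])
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    w, m, P = zip(*comps)
    return np.array(w), np.array(m), np.array(P)

# Hypothetical 1-D mixture reduced from three components to two.
w, m, P = reduce_mixture(
    weights=[0.4, 0.35, 0.25],
    means=[np.array([0.0]), np.array([0.1]), np.array([2.0])],
    covs=[np.eye(1), np.eye(1), np.eye(1)],
    max_components=2,
)
print(w, m.ravel())
```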

    Arduous implementation: Does the Normalisation Process Model explain why it's so difficult to embed decision support technologies for patients in routine clinical practice?

    Background: Decision support technologies (DSTs, also known as decision aids) help patients and professionals take part in collaborative decision-making processes. Trials have shown favorable impacts on patient knowledge, satisfaction, decisional conflict and confidence. However, they have not become routinely embedded in health care settings. Few studies have approached this issue using a theoretical framework. We explained problems of implementing DSTs using the Normalization Process Model, a conceptual model that focuses attention on how complex interventions become routinely embedded in practice. Methods: The Normalization Process Model was used as the basis of a conceptual analysis of the outcomes of previous primary research and reviews. Using a virtual working environment, we applied the model and its main concepts to examine: the 'workability' of DSTs in professional-patient interactions; how DSTs affect knowledge relations between their users; how DSTs impact on users' skills and performance; and the impact of DSTs on the allocation of organizational resources. Results: Conceptual analysis using the Normalization Process Model provided insight into implementation problems for DSTs in routine settings. Current research focuses mainly on the interactional workability of these technologies, but factors related to divisions of labor and health care, and the organizational contexts in which DSTs are used, are poorly described and understood. Conclusion: The model provided a framework for identifying factors that promote and inhibit the implementation of DSTs in healthcare, and gave insights into factors influencing the introduction of new technologies into contexts where negotiations are characterized by asymmetries of power and knowledge. Future research and development on the deployment of DSTs needs to take a more holistic approach, giving emphasis to the structural conditions and social norms in which these technologies are enacted.

    Bayesian versus frequentist statistical inference for investigating a one-off cancer cluster reported to a health department

    Background. The problem of silent multiple comparisons is one of the most difficult statistical problems faced by scientists. It is a particular problem for investigating a one-off cancer cluster reported to a health department, because any one of hundreds, or possibly thousands, of neighbourhoods, schools, or workplaces could have reported a cluster, which could have been for any one of several types of cancer or any one of several time periods. Methods. This paper contrasts the frequentist approach with a Bayesian approach for dealing with silent multiple comparisons in the context of a one-off cluster reported to a health department. Two published cluster investigations were re-analysed using the Dunn-Sidak method to adjust frequentist p-values and confidence intervals for silent multiple comparisons. Bayesian methods were based on the Gamma distribution. Results. Bayesian analysis with non-informative priors produced results similar to the frequentist analysis, and suggested that both clusters represented a statistical excess. In the frequentist framework, the statistical significance of both clusters was extremely sensitive to the number of silent multiple comparisons, which can only ever be a subjective "guesstimate". The Bayesian approach is also subjective: whether there is an apparent statistical excess depends on the specified prior. Conclusion. In cluster investigations, the frequentist approach is just as subjective as the Bayesian approach, but the Bayesian approach is less ambitious in that it treats the analysis as a synthesis of data and personal judgements (possibly poor ones), rather than objective reality. Bayesian analysis is (arguably) a useful tool to support complicated decision-making, because it makes the uncertainty associated with silent multiple comparisons explicit.
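    As a concrete illustration of the two approaches (not the re-analyses reported in the paper, whose data are not given in the abstract), the Python sketch below applies the Dunn-Sidak adjustment to a Poisson p-value and fits a Gamma-Poisson posterior for the ratio of observed to expected cases. The case counts, the number of silent comparisons, and the near-flat Gamma prior are all hypothetical.

```python
import numpy as np
from scipy import stats

def sidak_adjust(p, n_comparisons):
    """Dunn-Sidak adjustment of a single p-value for n silent comparisons."""
    return 1.0 - (1.0 - p) ** n_comparisons

def cluster_posterior(observed, expected, prior_shape=0.5, prior_rate=1e-6):
    """Gamma-Poisson model: observed ~ Poisson(theta * expected), with a near-flat
    Gamma(prior_shape, prior_rate) prior on the rate ratio theta (hypothetical choice)."""
    post = stats.gamma(a=prior_shape + observed, scale=1.0 / (prior_rate + expected))
    return {
        "posterior_mean": post.mean(),
        "ci_95": post.interval(0.95),
        "prob_excess": post.sf(1.0),   # posterior P(theta > 1)
    }

# Frequentist view: an unadjusted p-value that looks impressive can vanish once
# hundreds of silent comparisons are acknowledged (numbers are illustrative only).
p_raw = stats.poisson(mu=2.0).sf(7)            # P(X >= 8) when 2 cases were expected
print(p_raw, sidak_adjust(p_raw, 500))

# Bayesian view: the inferred excess depends on the prior, not on a guessed
# number of comparisons.
print(cluster_posterior(observed=8, expected=2.0))
```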